
Prominent tech scholar: AI ‘feels like a runaway train that we’re chasing on foot’


Professor Rudin, would you begin by telling us which clinical use cases you’re especially excited about within healthcare AI?

Right now I’m working on computer-aided mammography, neurology for critically ill patients, and analytics for wearable heart-monitoring devices (think of a smartwatch). I’m excited about the possibility of detecting arrhythmias much faster than ever before; it used to be that you might not detect one until the person was at the hospital for some terrible situation.
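
To make the wearable use case concrete, here is a minimal sketch of the kind of analytics a watch might run over beat-to-beat (R-R) intervals. RMSSD is a standard heart-rate-variability statistic, but the threshold and the flagging rule are illustrative assumptions, not Rudin’s actual method.

    def flag_irregular_rhythm(rr_intervals_ms, rmssd_threshold_ms=100):
        """Flag a possibly irregular rhythm from consecutive R-R intervals (ms).

        RMSSD (root mean square of successive differences) grows when the
        gap between beats keeps changing; the 100 ms cutoff is illustrative.
        """
        if len(rr_intervals_ms) < 3:
            return False  # too few beats to judge
        diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
        rmssd = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
        return rmssd > rmssd_threshold_ms

    print(flag_irregular_rhythm([1000, 1010, 995, 1005, 1000]))  # False: steady
    print(flag_irregular_rhythm([600, 1200, 700, 1400, 650]))    # True: erratic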

A lot of my work is on techniques, which are cross-cutting, so you can use them across domains. We’re designing techniques for building medical scoring systems: tiny little formulas that look like someone might have created them by hand (they could fit on an index card) but are actually deceptively accurate machine learning models. Those can be used in almost any area of medicine.
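
To make “fits on an index card” concrete, here is a hypothetical scoring system in the same spirit. The features, point values, and intercept below are invented for illustration; real systems of this kind learn the integer points from data, and this is not a validated clinical score.

    import math

    # Hypothetical conditions and integer points; all values are made up.
    HYPOTHETICAL_SCORE = [
        ("age >= 75",             lambda p: p["age"] >= 75,             2),
        ("prior hospitalization", lambda p: p["prior_hospitalization"], 3),
        ("abnormal ECG",          lambda p: p["abnormal_ecg"],          2),
    ]

    def risk(patient):
        # Add up the points for every condition the patient meets...
        total = sum(pts for _, cond, pts in HYPOTHETICAL_SCORE if cond(patient))
        # ...then map the integer score to a probability with a logistic
        # curve (the offset of 4 is an arbitrary illustrative intercept).
        return 1 / (1 + math.exp(-(total - 4)))

    print(risk({"age": 80, "prior_hospitalization": True, "abnormal_ecg": False}))
    # ~0.73: a reader can verify the 5 points and the resulting risk by hand.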

For the mammography project, we’re building interpretable neural networks, and I’m excited that these are as accurate as black box neural networks, so they can replace black box networks in any clinical domain that uses images.
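
Interpretable image networks in this vein often reason case by case: they score an image by its similarity to prototypical patches learned from training examples, and a final linear layer weighs those scores, so every prediction traces back to specific cases. A heavily simplified sketch, assuming PyTorch and made-up tensor shapes (not the actual mammography architecture):

    import torch

    def prototype_similarities(features, prototypes):
        """features: conv-net output for one image, shape (C, H, W).
        prototypes: learned prototype vectors, shape (P, C).
        Returns one similarity score per prototype, shape (P,)."""
        C, H, W = features.shape
        patches = features.permute(1, 2, 0).reshape(-1, C)  # (H*W, C)
        # Squared L2 distance from every image patch to every prototype.
        dists = torch.cdist(patches, prototypes) ** 2       # (H*W, P)
        # Keep each prototype's best-matching patch, then turn that
        # distance into a similarity score the classifier can weigh.
        min_dists = dists.min(dim=0).values                 # (P,)
        return torch.log((min_dists + 1) / (min_dists + 1e-4))

A linear layer over these similarity scores produces the prediction, which is what makes the model’s reasoning inspectable rather than a black box.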

Are there any clinical applications over which you have strong misgivings?

I really don’t like what people are doing with explaining black boxes and telling everyone they actually understand what the black box is doing, because they don’t! There have been more than a couple of cases where FDA-approved models went wrong, and because they are black boxes, no one really knows why. If you care, and if it’s important, build an interpretable model.

Everyone wants to use ChatGPT for medicine. Given that it provides wrong answers with great confidence, I don’t know why you’d want to trust it for high-stakes decisions. Just because it’s amazingly cool does not make it trustworthy.

Any thoughts on nonclinical uses of AI in healthcare—billing and coding, medical debt collections, administrative efficiencies and so on?

It’s a good idea to use AI to find patterns that might be important, but if it is something that matters, a human should be making the final decision. I know there were some articles about debt collections that were totally automated and not designed very well, which prevented people from reaching a human to sort out the issue. There were also algorithms, again not designed very well, that provided service in a racially biased way because they predicted the cost of healthcare as a proxy for how sick the patient was (which is a racially biased estimate).
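
To see the proxy problem in miniature, here is a synthetic two-patient example (the numbers are invented purely to show the mechanism): both patients are equally sick, but one faced barriers to care and so generated less past spending, which a cost-trained model reads as lower need.

    # Both patients have the same true illness burden; only access differs.
    patients = {
        "A": {"illness": 8, "past_cost": 12000},  # good access to care
        "B": {"illness": 8, "past_cost": 6000},   # barriers to care
    }

    # A model trained to predict cost effectively ranks need by past cost.
    ranked = sorted(patients, key=lambda name: -patients[name]["past_cost"])
    print(ranked)  # ['A', 'B']: B is deprioritized despite equal illness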

The problem is that AI gets implemented way too soon. People get excited about what it can do and push it out there when it’s not very good and hasn’t been tested properly. It would be lovely, though, if the AI could fill in repetitive forms for us.


